Instance generation

This section shows the processing time values for the generated instances.

The plot below shows the processing time density for each of the generated distributions.

The plot below shows the processing times for two machines on instances with different correlation values.
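
One common way to obtain controlled correlation between machines is to mix a shared per-job component with independent machine-specific noise. The sketch below, assuming numpy, illustrates that idea; the mixing scheme and the uniform ranges are illustrative assumptions, not the report's actual generator.

```python
import numpy as np

def correlated_instance(n_jobs, n_machines, alpha, rng):
    """Processing-time matrix whose machine columns are correlated across
    jobs.  alpha in [0, 1] mixes a shared per-job component with
    independent machine noise (alpha = 0: uncorrelated, alpha = 1:
    perfectly correlated).  Illustrative scheme, not the report's."""
    shared = rng.uniform(1, 99, size=(n_jobs, 1))        # per-job component
    noise = rng.uniform(1, 99, size=(n_jobs, n_machines))  # per-machine noise
    return alpha * shared + (1 - alpha) * noise

rng = np.random.default_rng(0)
p = correlated_instance(50, 2, alpha=0.8, rng=rng)
r = np.corrcoef(p[:, 0], p[:, 1])[0, 1]  # correlation between the two machines
```

With a high alpha the shared component dominates, so the two machine columns show a strong positive correlation.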

Algorithm

Algorithm parameter details:

param type values
IOR o 0 0.25 0.5 0.75 1
IOI c sum_pij dev_pij avgdev_pij abs_dif ss_sra ss_srs ss_srn_rcn ss_sra_rcn ss_srs_rcn ss_sra_2rcn ra_c1 ra_c2 ra_c3 lr_it_aj_ct lr_it_ct lr_it lr_aj lr_ct kk1 kk2 nm
IOW c no yes
IOS c incr decr hill valley hi_hilo hi_lohi lo_hilo lo_lohi
NOI c sum_pij dev_pij avgdev_pij abs_dif ss_sra ss_srs ss_srn_rcn ss_sra_rcn ss_srs_rcn ss_sra_2rcn ra_c1 ra_c2 ra_c3 lr_it_ct lr_it lr_aj lr_ct kk1 kk2 nm
NOS c incr decr hill valley hi_hilo hi_lohi lo_hilo lo_lohi
NOW c no yes
NTB c first_best last_best kk1 kk2 nm1
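
Taken together, the parameters above span a large configuration space. A minimal sketch of its size, assuming the parameters were independent (conditional dependencies between them would reduce the count):

```python
import math

# Number of values per parameter, counted from the table above.
param_values = {
    "IOR": 5, "IOI": 21, "IOW": 2, "IOS": 8,
    "NOI": 20, "NOS": 8, "NOW": 2, "NTB": 5,
}

# Full Cartesian product, ignoring parameter dependencies.
total = math.prod(param_values.values())
print(total)  # 2688000 candidate configurations
```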

Features

The plots below show the histograms for all features colored by: problem type, objective, number of jobs, number of machines and correlation type (for generated instances).

Best parameters

The sections below show the distributions of the best values for each parameter, colored by objective, problem type, number of jobs and number of machines.

Parameter values by objective

Parameter values by problem type

Parameter values by number of jobs

Parameter values by number of machines

Recommendation models

Accuracy for each recommendation model and parameter:

F-score for each recommendation model and parameter:
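
For reference, both metrics can be computed directly; the sketch below shows accuracy and macro-averaged F-score on a hypothetical set of recommendations for the NOW parameter (the labels are made up, not experiment data).

```python
def accuracy(y_true, y_pred):
    """Fraction of recommendations matching the true best value."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Macro-averaged F-score: per-class F1, averaged over classes."""
    labels = set(y_true) | set(y_pred)
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical true/predicted values for NOW ("no"/"yes"):
y_true = ["no", "yes", "yes", "no", "yes"]
y_pred = ["no", "yes", "no", "no", "yes"]
acc = accuracy(y_true, y_pred)   # 0.8
f1 = macro_f1(y_true, y_pred)    # 0.8
```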

Decision tree models

The plots below show the decision trees generated for each parameter recommendation task.

Below are all the decision trees for the recommendation tasks considering parameter dependencies:

Decision trees variable importance

The bar plots below show, for each parameter, the variable importance for the decision tree model without dependencies:

For the models including parameter dependencies, the variable importances are shown below:
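
A minimal sketch of how such importances are extracted, assuming scikit-learn; the feature names and data below are hypothetical stand-ins for the instance features.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical instance features (names are illustrative only).
feature_names = ["n_jobs", "n_machines", "corr", "skew"]
X = rng.uniform(size=(200, 4))
# Make the target depend mostly on the first feature.
y = (X[:, 0] > 0.5).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
# Gini-based variable importances, normalized to sum to 1.
importance = dict(zip(feature_names, tree.feature_importances_))
```

Since the target here is driven by the first feature, its importance dominates; bar plots of these values give the figures in this section.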

Models optimization performance

Optimization performance was measured as the average relative performance, \((perf - best\_perf) / best\_perf\).
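
As a worked check of this measure (the numbers below are hypothetical, not from the experiments):

```python
def relative_performance(perf, best_perf):
    """Relative performance (perf - best_perf) / best_perf.  Negative
    values mean the configuration beat the reference best, which is why
    the q00 column in the table below can be negative."""
    return (perf - best_perf) / best_perf

# A makespan of 1050 against a best-known 1000 gives a 5% gap:
gap = relative_performance(1050, 1000)  # 0.05
```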

The quantiles of the performance values for each model are shown in the table below.

model_strat       q00         q25        q50        q75         q100
Random           -9.319826   1.0872599  4.5165265  15.9898617  2137.01230
Standard NEH     -8.489846   0.0000000  0.4352138   1.6293360    28.46078
Global-best NEH  -9.913169   0.0000000  0.4488229   1.4098469    14.31729
DT               -8.911596   0.0000000  0.1750807   0.8874725    13.32882
DT+Dependencies  -8.565194   0.0000000  0.1765974   0.8709982    10.72643
RF               -8.489846   0.0000000  0.1549657   0.8165678    24.01786
RF+Dependencies  -8.367690  -0.0083383  0.0975155   0.6451000    16.71320

Below is a violin plot of each model's performance.

And, filtering out the random choice performance:

The Friedman test was applied to the optimization performance data, treating each instance as a block. The table below shows the pairwise p-values adjusted with the Nemenyi post-hoc test:

                 Random  Standard NEH  Global-best NEH  DT         DT+Dependencies  RF
Standard NEH     0       NA            NA               NA         NA               NA
Global-best NEH  0       0.3105267     NA               NA         NA               NA
DT               0       0.0000000     0                NA         NA               NA
DT+Dependencies  0       0.0000000     0                0.9995260  NA               NA
RF               0       0.0000000     0                0.9999419  0.9908703        NA
RF+Dependencies  0       0.0000000     0                0.0004696  0.0000655        0.0016537
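
A minimal sketch of this testing procedure, assuming scipy; the performance matrix below is synthetic, not the report's data, and the Nemenyi post-hoc step (available e.g. in the scikit-posthocs package) is only noted in a comment.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(1)

# Hypothetical performance values: one row per instance (block), one
# column per model; real data would come from the experiments above.
n_instances = 30
base = rng.normal(1.0, 0.3, size=n_instances)        # instance difficulty
model_a = base + rng.normal(0.0, 0.05, n_instances)  # strong model
model_b = base + rng.normal(0.2, 0.05, n_instances)  # weaker model
model_c = base + rng.normal(0.5, 0.05, n_instances)  # weakest model

# Friedman rank test across blocks; a small p-value means at least one
# model differs.  Pairwise differences are then located with a
# post-hoc test such as Nemenyi.
stat, pvalue = friedmanchisquare(model_a, model_b, model_c)
```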

The following test compares the RF+Dependencies model (the one with the best performance) against all other models on the optimization performance, using the many-to-one Friedman test with the Demsar post-hoc.

                 RF+Dependencies
Random           0.00e+00
Standard NEH     0.00e+00
Global-best NEH  0.00e+00
DT               4.70e-05
DT+Dependencies  9.60e-06
RF               8.53e-05